19 research outputs found

    Signatures of criticality arise in simple neural population models with correlations

    Large-scale recordings of neuronal activity make it possible to gain insights into the collective activity of neural ensembles. It has been hypothesized that neural populations might be optimized to operate at a 'thermodynamic critical point', and that this property has implications for information processing. Support for this notion has come from a series of studies which identified statistical signatures of criticality in the ensemble activity of retinal ganglion cells. What are the underlying mechanisms that give rise to these observations? Here we show that signatures of criticality arise even in simple feed-forward models of retinal population activity. In particular, they occur whenever neural population data exhibits correlations and is randomly sub-sampled during data analysis. These results show that signatures of criticality are not necessarily indicative of an optimized coding strategy, and challenge the utility of analysis approaches based on equilibrium thermodynamics for understanding partially observed biological systems.
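    The abstract's central claim is that correlations plus random subsampling are enough to produce criticality signatures. The snippet below is an illustrative sketch only (not the authors' code): it draws binary spikes from a hypothetical shared-Gaussian-input model with toy parameter values and checks that the average pairwise correlation of a randomly subsampled population does not shrink as the subsample grows, which is the precondition for the diverging specific heat discussed in the related abstract further down.

```python
# Minimal sketch (not the paper's code): binary population activity driven by a
# shared Gaussian input, then randomly subsampled at several population sizes.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_bins = 200, 50_000
c, threshold = 0.1, 1.5          # toy shared-input strength and spiking threshold

shared = rng.standard_normal(n_bins)                  # common input, one value per time bin
private = rng.standard_normal((n_neurons, n_bins))    # independent noise per neuron
spikes = (np.sqrt(c) * shared + np.sqrt(1 - c) * private > threshold).astype(float)

def mean_pairwise_corr(x):
    """Average off-diagonal Pearson correlation of binary spike trains."""
    r = np.corrcoef(x)
    n = r.shape[0]
    return (r.sum() - n) / (n * (n - 1))

for size in (20, 50, 100, 200):
    subset = rng.choice(n_neurons, size=size, replace=False)
    print(size, round(mean_pairwise_corr(spikes[subset]), 3))
# The mean pairwise correlation stays essentially flat across subsample sizes:
# the condition under which specific heat grows with population size.
```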

    Training deep neural density estimators to identify mechanistic models of neural dynamics

    Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool that uses deep neural density estimators, trained using model simulations, to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin–Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
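    As an illustration of the workflow the abstract describes, here is a hedged sketch using the open-source `sbi` package, which implements this family of neural density estimators. The toy simulator, prior bounds, and simulation budget are placeholders, and class or method names may differ between `sbi` versions.

```python
# Hedged sketch of simulation-based inference with a neural posterior estimator.
# The simulator here is a toy stand-in, not a mechanistic neural model.
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

def simulator(theta):
    # Toy simulator: a noisy observation of the parameters themselves.
    return theta + 0.1 * torch.randn_like(theta)

prior = BoxUniform(low=-2 * torch.ones(2), high=2 * torch.ones(2))

theta = prior.sample((5_000,))            # draw parameters from the prior
x = simulator(theta)                      # run the simulator on each draw

inference = SNPE(prior=prior)             # neural density estimator of p(theta | x)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

x_o = torch.tensor([0.5, -0.3])           # "observed" data
samples = posterior.sample((1_000,), x=x_o)   # parameters compatible with x_o
print(samples.mean(dim=0))
```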

    Dyadic coping and its underlying neuroendocrine mechanisms - implications for stress regulation

    Previous research suggests that neuroendocrine mechanisms underlie inter-individual stress coping in couples. The neuropeptide oxytocin (OT), which regulates stress-sensitive HPA-axis activity, might be crucial in this process. The purpose of this study was to examine the impact of dyadic coping abilities and OT on HPA-axis outcomes and constructive behavior during couple conflict. We conducted a secondary analysis of our previous database (Ditzen et al., 2009), assessing the modulating role of dyadic coping and intranasal OT on couple conflict behavior. The data revealed a significant interaction effect of the dyadic-coping-by-oneself score and OT on cortisol responses during couple conflict, suggesting that individuals with low a priori dyadic coping in particular benefit from OT in terms of dampened HPA activity. The results are in line with previous research suggesting OT's central role in stress regulation and prosocial behavior. Furthermore, the interaction with dyadic coping indicates adaptations in the sensitivity of the OT system over an individual's attachment and relationship history. These data add to the evidence that neuroendocrine attachment systems influence couple behavior. Future studies of the neurobiological mechanisms underlying dyadic coping will be of high relevance for the development of prevention and intervention programs.

    Detection of pyrazinoic acid as a biomarker of pyrazinamide resistance in Mycobacterium tuberculosis using two immunoassays based on magnetic nanoparticles, tmRNA, and RpsA

    This work designs a system to detect pyrazinoic acid (POA), which is a product of the bacterium's metabolism and the active principle of the drug. When a bacterium is susceptible to pyrazinamide treatment, it produces POA in culture at a certain rate, whereas resistant strains produce little or none. Two detection systems were assembled: System I coupled the RpsA protein and its tmRNA to a streptavidin magnetic nanoparticle via the biotinylated tmRNA, while System II coupled the same molecules to a cobalt magnetic nanoparticle via the histidine tag of the RpsA protein. These systems were tested, in a preliminary manner, against commercial POA. The first system detected as little as five picomoles of the acid, but these results could not be reproduced. The second system detected down to 750 picomoles of the acid and was reproduced on two occasions. Both systems were stable and shown to be correctly assembled; however, the variability in the POA detection results presented here suggests that the interaction between RpsA and POA is not as strong as reported in earlier studies. This result supports a recent study published in Nature that refutes the hypothesis that pyrazinoic acid binds RpsA with high affinity.

    26th Annual Computational Neuroscience Meeting (CNS*2017): Part 1


    Deep Emulators for Differentiation, Forecasting, and Parametrization in Earth Science Simulators

    To understand and predict large, complex, and chaotic systems, Earth scientists build simulators from physical laws. Simulators generalize better to new scenarios, require fewer tunable parameters, and are more interpretable than nonphysical deep learning, but procedures for obtaining their derivatives with respect to their inputs are often unavailable. These missing derivatives limit the application of many important tools for forecasting, model tuning, sensitivity analysis, or subgrid-scale parametrization. Here, we propose to overcome this limitation with deep emulator networks that learn to calculate the missing derivatives. By training directly on simulation data without analyzing source code or equations, this approach supports simulators in any programming language on any hardware without specialized routines for each case. To demonstrate the effectiveness of our approach, we train emulators on complete or partial system states of the chaotic Lorenz-96 simulator and evaluate the accuracy of their dynamics and derivatives as a function of integration time and training data set size. We further demonstrate that emulator-derived derivatives enable accurate 4D-Var data assimilation and closed-loop training of parametrizations. These results provide a basis for further combining the parsimony and generality of physical models with the power and flexibility of machine learning.

    Plain Language Summary: Many Earth science simulators are implemented as monolithic programs that calculate changes in the state of a system over time. In many cases, using or improving these simulators also requires the derivatives of their outputs with respect to inputs, which describe how future states depend on past states. These derivatives can be difficult or costly to compute. Several recent studies have applied deep learning (DL) to simulation data to construct emulators of their dynamics. Here, we use the fact that DL models can be easily and automatically differentiated to obtain approximate derivatives of the original simulator and test this idea on a simple and common chaotic model of the atmosphere. We verify in several experiments that the emulator derivatives, which require neither additional training nor extensive postprocessing to obtain, can indeed be used as a valid substitute for the derivatives of the simulator.

    Key Points: Deep learning models trained on simulation data can learn the dynamics of Earth science simulators. Deep learning models also learn the input-output derivatives of the state-update function, which are unavailable for many simulators. We show on Lorenz-96 that these learned derivatives can be used directly for data assimilation and parametrization tuning.
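    A minimal PyTorch sketch of the core idea, under stated assumptions (this is not the authors' code; the network architecture, step size, and training budget are arbitrary placeholders): fit an emulator to one-step Lorenz-96 transitions generated by the true simulator, then obtain the otherwise-unavailable tangent-linear model as the emulator's Jacobian via automatic differentiation.

```python
# Sketch: emulate one RK4 step of Lorenz-96, then differentiate the emulator.
import torch

F, N, DT = 8.0, 40, 0.05   # standard Lorenz-96 forcing, state size, and step size

def l96_tendency(x):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with cyclic indexing
    return (torch.roll(x, -1, -1) - torch.roll(x, 2, -1)) * torch.roll(x, 1, -1) - x + F

def rk4_step(x):
    k1 = l96_tendency(x)
    k2 = l96_tendency(x + 0.5 * DT * k1)
    k3 = l96_tendency(x + 0.5 * DT * k2)
    k4 = l96_tendency(x + DT * k3)
    return x + DT / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Generate a training trajectory from the "true" simulator.
torch.manual_seed(0)
x = F + torch.randn(N)
for _ in range(500):                       # spin up onto the attractor
    x = rk4_step(x)
states = [x]
for _ in range(20_000):
    states.append(rk4_step(states[-1]))
states = torch.stack(states)
inputs, targets = states[:-1], states[1:]

# Train a small fully connected emulator of the one-step state update.
emulator = torch.nn.Sequential(
    torch.nn.Linear(N, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, N),
)
opt = torch.optim.Adam(emulator.parameters(), lr=1e-3)
for epoch in range(20):
    perm = torch.randperm(len(inputs))
    for i in range(0, len(inputs), 256):
        idx = perm[i:i + 256]
        loss = torch.nn.functional.mse_loss(emulator(inputs[idx]), targets[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()

# The emulator is differentiable end to end, so the derivatives the original
# simulator does not expose (its tangent-linear model) come from autodiff.
x0 = states[0]
jacobian = torch.autograd.functional.jacobian(emulator, x0)   # (N, N): d x_{t+1} / d x_t
print(jacobian.shape)
```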

    Signatures of criticality arise from random subsampling in simple population models

    The rise of large-scale recordings of neuronal activity has fueled the hope to gain new insights into the collective activity of neural ensembles. How can one link the statistics of neural population activity to underlying principles and theories? One attempt to interpret such data builds upon analogies to the behaviour of collective systems in statistical physics. Divergence of the specific heat—a measure of population statistics derived from thermodynamics—has been used to suggest that neural populations are optimized to operate at a “critical point”. However, these findings have been challenged by theoretical studies which have shown that common inputs can lead to diverging specific heat. Here, we connect “signatures of criticality”, and in particular the divergence of specific heat, back to statistics of neural population activity commonly studied in neural coding: firing rates and pairwise correlations. We show that the specific heat diverges whenever the average correlation strength does not depend on population size. This is necessarily true when data with correlations is randomly subsampled during the analysis process, irrespective of the detailed structure or origin of correlations. We also show how the characteristic shape of specific heat capacity curves depends on firing rates and correlations, using both analytically tractable models and numerical simulations of a canonical feed-forward population model. To analyze these simulations, we develop efficient methods for characterizing large-scale neural population activity with maximum entropy models. We find that, consistent with experimental findings, increases in firing rates and correlation directly lead to more pronounced signatures. Thus, previous reports of thermodynamical criticality in neural populations based on the analysis of specific heat can be explained by average firing rates and correlations, and are not indicative of an optimized coding strategy. We conclude that a reliable interpretation of statistical tests for theories of neural coding is possible only in reference to relevant ground-truth models.
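    The statement that the specific heat diverges whenever the average correlation does not depend on population size can be checked directly in the analytically tractable "flat" case, where all patterns with the same population spike count are equally probable. The sketch below (not the paper's code; the beta-binomial count distribution and its parameters a and b are stand-ins for a shared-input model) computes the specific heat per neuron over a temperature grid and shows its peak growing with the number of neurons.

```python
# Illustrative sketch for the "flat" (count-based) model: all patterns with spike
# count K are equally likely, and P(K) is beta-binomial, which keeps the mean rate
# and mean pairwise correlation fixed as the population size N grows.
import numpy as np
from scipy.special import betaln, gammaln, logsumexp

def specific_heat(N, a=1.0, b=9.0, temps=np.linspace(0.5, 3.0, 101)):
    K = np.arange(N + 1)
    log_binom = gammaln(N + 1) - gammaln(K + 1) - gammaln(N - K + 1)
    logP_K = log_binom + betaln(K + a, N - K + b) - betaln(a, b)   # beta-binomial P(K)
    energy = -(logP_K - log_binom)       # E(x) = -log p(x) for one pattern with count K
    c = []
    for T in temps:
        logw = log_binom - energy / T    # unnormalized log-weight of each count at temperature T
        p = np.exp(logw - logsumexp(logw))
        mean_E = np.sum(p * energy)
        var_E = np.sum(p * (energy - mean_E) ** 2)
        c.append(var_E / (N * T ** 2))   # specific heat per neuron, c(T) = Var[E] / (N T^2)
    return np.asarray(c)

for N in (25, 50, 100, 200):
    print(N, round(specific_heat(N).max(), 2))
# The peak specific heat per neuron keeps increasing with N -- the "signature of
# criticality" -- even though the model is just shared-rate noise, randomly subsampled.
```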